
    First experience in operating the population of the condition databases for the CMS experiment

    Reliable population of the condition databases is critical for the correct operation of the online selection as well as of the offline reconstruction and analysis of data. We describe here the system put in place in the CMS experiment to populate the databases and make condition data promptly available both online for the high-level trigger and offline for reconstruction. The system, designed for high flexibility to cope with very different data sources, uses POOL-ORA technology to store data in an object format that best matches the object-oriented paradigm of the C++ programming language used in the CMS offline software. To ensure consistency among the various subdetectors, a dedicated package, PopCon (Populator of Condition Objects), is used to store data online. The data are then automatically streamed to the offline database and are hence immediately accessible offline worldwide. This mechanism was used intensively during the 2008 test runs with cosmic rays. The experience of these first months of operation is discussed in detail. Comment: 15 pages, submitted to JOP, CHEP0
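    To illustrate the object-with-interval-of-validity storage model described above, the following is a minimal C++ sketch; the types and the open-ended IOV convention are hypothetical illustrations, not the actual PopCon or POOL-ORA interfaces.

        #include <cstdint>
        #include <map>
        #include <utility>
        #include <vector>

        // Hypothetical condition payload: per-channel pedestal values for one subdetector.
        struct Pedestals {
          std::vector<float> values;  // one entry per read-out channel
        };

        // Interval of validity: the payload applies from run `since` up to run `till`.
        struct IOV {
          std::uint64_t since;
          std::uint64_t till;
        };

        // Toy "tag": a time-ordered sequence of (IOV, payload) pairs keyed by first valid run.
        using ConditionTag = std::map<std::uint64_t, std::pair<IOV, Pedestals>>;

        // Append a new payload valid from run `since` onwards (open-ended IOV).
        void append(ConditionTag& tag, std::uint64_t since, Pedestals payload) {
          tag[since] = {IOV{since, UINT64_MAX}, std::move(payload)};
        }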

    Clinical expression of facioscapulohumeral muscular dystrophy in carriers of 1-3 D4Z4 reduced alleles: Experience of the FSHD Italian National Registry

    OBJECTIVES: Facioscapulohumeral muscular dystrophy type 1 (FSHD1) has been genetically linked to reduced numbers (≤8) of D4Z4 repeats at 4q35. Particularly severe FSHD cases, characterised by an infantile onset and the presence of additional extra-muscular features, have been associated with the shortest D4Z4 reduced alleles with 1-3 repeats (1-3 DRA). We searched for signs of perinatal onset and evaluated disease outcome through the systematic collection of clinical and anamnestic records of de novo and familial index cases and their relatives carrying 1-3 DRA. SETTING: Italy. PARTICIPANTS: 66 index cases and 33 relatives carrying 1-3 DRA. OUTCOMES: The clinical examination was performed using the standardised FSHD evaluation form with validated inter-rater reliability. To investigate the earliest signs of disease, we designed the Infantile Anamnestic Questionnaire (IAQ). Comparison of age at onset was performed using the non-parametric Wilcoxon rank-sum or Kruskal-Wallis test. Comparison of the FSHD score was performed using a general linear model and Wald test. Kaplan-Meier survival analysis was used to estimate the age-specific cumulative motor impairment risk. RESULTS: No patients had perinatal onset. Among index cases, 36 (54.5%) showed the first signs by 10 years of age. The large majority of patients with early disease onset (26 out of 36, 72.2%) were de novo, whereas the majority of patients with disease onset after 10 years of age were familial (16, 53.3%). Comparison of the disease severity outcome between index cases with age at onset before and over 10 years of age failed to detect statistical significance (Wald test p value=0.064). Of 61 index cases, only 17 (27.9%) presented extra-muscular conditions. Relatives carrying 1-3 DRA showed a large clinical variability, ranging from healthy subjects to patients with severe motor impairment. CONCLUSIONS: The size of the D4Z4 allele is not always predictive of severe clinical outcome. The high degree of clinical variability suggests that additional factors contribute to the phenotype complexity.
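    For reference, the Kaplan-Meier (product-limit) estimator underlying the age-specific cumulative risk has the standard textbook form below, where $t_i$ are the observed onset ages, $d_i$ the number of subjects reaching the motor impairment endpoint at age $t_i$, and $n_i$ the number still at risk just before $t_i$; this is the general definition, not a detail taken from the paper.

        \hat{S}(t) = \prod_{t_i \le t} \left( 1 - \frac{d_i}{n_i} \right), \qquad \hat{F}(t) = 1 - \hat{S}(t)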

    A Roadmap for HEP Software and Computing R&D for the 2020s

    Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer amounts of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade.

    Functional tests of a prototype for the CMS-ATLAS common non-event data handling framework

    Since 2014 the ATLAS and CMS experiments have shared a common vision of the database infrastructure for the handling of the non-event data in forthcoming LHC runs. The wide commonality in the use cases has allowed the two experiments to agree on a common overall design solution that meets the requirements of both. A first prototype was completed in 2016 and made available to both experiments. The prototype is based on a web service implementing a REST API with a set of functions for the management of conditions data. In this contribution, we describe this prototype architecture and the tests that have been performed within the CMS computing infrastructure, with the aim of validating the support of the main use cases and of suggesting future improvements.
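    As a rough illustration of the kind of client interaction a conditions REST service implies, the sketch below issues an HTTP GET with libcurl; the host name and endpoint path are hypothetical placeholders, not the actual service URL or API of the prototype.

        #include <curl/curl.h>
        #include <iostream>
        #include <string>

        // Accumulate the HTTP response body into a std::string.
        static size_t collect(char* data, size_t size, size_t nmemb, void* userp) {
          static_cast<std::string*>(userp)->append(data, size * nmemb);
          return size * nmemb;
        }

        int main() {
          curl_global_init(CURL_GLOBAL_DEFAULT);
          CURL* curl = curl_easy_init();
          if (!curl) return 1;
          std::string body;
          // Hypothetical endpoint: list the IOVs belonging to a given conditions tag.
          curl_easy_setopt(curl, CURLOPT_URL,
                           "https://conditions.example.org/api/tags/MyTag_v1/iovs");
          curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
          curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
          CURLcode rc = curl_easy_perform(curl);
          if (rc == CURLE_OK) std::cout << body << "\n";
          curl_easy_cleanup(curl);
          curl_global_cleanup();
          return rc == CURLE_OK ? 0 : 1;
        }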

    Monitoring of the infrastructure and services used to handle and automatically produce Alignment and Calibration conditions at CMS

    The Compact Muon Solenoid (CMS) experiment makes extensive use of alignment and calibration measurements in several crucial workflows: in the event selection at the High Level Trigger (HLT), in the processing of the recorded collisions and in the production of simulated events. A suite of services addresses the key requirements for the handling of the alignment and calibration conditions, such as recording the status of the experiment and of the ongoing data taking, accepting conditions data updates provided by the detector experts, aggregating and navigating the calibration scenarios, and distributing conditions for consumption by the collaborators. Since a large fraction of these services is critical for data taking and event filtering in the HLT, a comprehensive monitoring and alarm-generating system had to be developed. This monitoring system is based on the open-source industry standard for monitoring and alerting services (Nagios) and monitors the database back-end, the hosting nodes and key heart-beat functionalities for all the services involved. This paper describes the design, implementation and operational experience with the monitoring system developed and deployed at CMS in 2016.

    The CMS experiment at the CERN LHC has a dedicated infrastructure to handle the alignment and calibration data. This infrastructure is composed of several services, which take on the various data management tasks required for the consumption of the non-event data (also known as condition data) in the experiment's activities. The criticality of these tasks imposes tight requirements on the availability and the reliability of the services executing them. In this scope, a comprehensive monitoring and alarm-generating system has been developed. The system has been implemented based on the Nagios open-source industry standard for monitoring and alerting services, and monitors the database back-end, the hosting nodes and key heart-beat functionalities for all the services involved. This paper describes the design, implementation and operational experience with the monitoring system developed and deployed at CMS in 2016.
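    For context, Nagios checks are stand-alone executables that report their result through the exit code (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN) and a one-line status message. A minimal heart-beat-style check could look like the sketch below; the status-file path and the thresholds are hypothetical, not those used in the CMS deployment.

        #include <ctime>
        #include <fstream>
        #include <iostream>

        // Minimal Nagios-style check: read a heart-beat timestamp written by a service
        // (hypothetical file location) and compare its age against alerting thresholds.
        int main() {
          std::ifstream in("/var/run/conditions_service_heartbeat");  // hypothetical path
          std::time_t last = 0;
          if (!(in >> last)) {
            std::cout << "UNKNOWN - heart-beat file missing or unreadable\n";
            return 3;
          }
          const long age = static_cast<long>(std::time(nullptr) - last);
          if (age > 3600) { std::cout << "CRITICAL - last heart-beat " << age << "s ago\n"; return 2; }
          if (age > 600)  { std::cout << "WARNING - last heart-beat "  << age << "s ago\n"; return 1; }
          std::cout << "OK - last heart-beat " << age << "s ago\n";
          return 0;
        }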

    Multi-threaded Object Streaming

    The CMS experiment at CERN's Large Hadron Collider in Geneva has redesigned the code handling the conditions data over the last few years, aiming to increase performance and enhance maintainability. The new design includes a move to serialise all payloads before storing them in the database, allowing the payloads to be handled in external tools independently of a given software release. In this talk we present the results of performance studies done using the serialisation package from the Boost suite as well as serialisation done with the ROOT (v5) tools. Furthermore, as the Boost tools allow parallel (de-)serialisation, we show the performance gains achieved with parallel threads when de-serialising a realistic set of conditions in CMS. Without specific optimisations, an overall speed-up of a factor of 3-4 was achieved using multi-threaded loading and de-serialisation of our conditions.
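    The sketch below shows the general pattern of Boost serialisation of a payload to an opaque blob, and de-serialisation of independent blobs on separate threads; the payload type and the one-thread-per-blob scheduling are illustrative assumptions, not the CMS implementation.

        #include <boost/archive/binary_iarchive.hpp>
        #include <boost/archive/binary_oarchive.hpp>
        #include <boost/serialization/vector.hpp>
        #include <sstream>
        #include <string>
        #include <thread>
        #include <vector>

        // Toy payload standing in for a conditions object (hypothetical, for illustration).
        struct Payload {
          std::vector<float> values;
          template <class Archive>
          void serialize(Archive& ar, const unsigned int /*version*/) { ar & values; }
        };

        std::string toBlob(const Payload& p) {            // serialise one payload
          std::ostringstream os;
          {
            boost::archive::binary_oarchive oa(os);
            oa << p;
          }                                               // archive flushed on destruction
          return os.str();
        }

        Payload fromBlob(const std::string& blob) {       // de-serialise one payload
          std::istringstream is(blob);
          boost::archive::binary_iarchive ia(is);
          Payload p;
          ia >> p;
          return p;
        }

        // De-serialise a set of independent blobs in parallel, one thread per blob.
        std::vector<Payload> loadParallel(const std::vector<std::string>& blobs) {
          std::vector<Payload> out(blobs.size());
          std::vector<std::thread> workers;
          for (std::size_t i = 0; i < blobs.size(); ++i)
            workers.emplace_back([&, i] { out[i] = fromBlob(blobs[i]); });
          for (auto& t : workers) t.join();
          return out;
        }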

    Development and operational experience of the web based application to collect, manage, and release the alignment and calibration configurations for data processing at CMS

    The alignment and calibration workflows at the Compact Muon Solenoid (CMS) experiment are fundamental to providing high-quality physics data and to maintaining the design performance of the experiment. To facilitate the operational efforts required by the experiment, the alignment and calibration team has developed and deployed a set of web-based applications to search, navigate and prepare a consistent set of calibrations to be consumed in the reconstruction of data for physics, accessible through the Condition DB Browser. The Condition DB Browser also hosts various data management tools, including a visualization tool that allows users to easily inspect alignment and calibration contents, a user-defined notification agent for delivering updates on modifications to the database, a logging service for the user, and the automatic online-to-offline condition uploads. In this paper we report on the operational experience of this web application from the 2017 data taking, with a focus on new features and tools incorporated during this period.

    Development and operational experience of the web based application to collect, manage, and release the alignment and calibration configurations for data processing at CMS

    The alignment and calibration workflows at the Compact Muon Solenoid (CMS) experiment are fundamental to providing high-quality physics data and to maintaining the design performance of the experiment. To facilitate the operational efforts required by the experiment, the alignment and calibration team has developed and deployed a set of web-based applications to search, navigate and prepare a consistent set of calibrations to be consumed in the reconstruction of data for physics, accessible through the Condition DB Browser. The Condition DB Browser also hosts various data management tools, including a visualization tool that allows users to easily inspect alignment and calibration contents, a user-defined notification agent for delivering updates on modifications to the database, a logging service for the user, and the automatic online-to-offline condition uploads. In this paper we report on the operational experience of this web application from the 2017 data taking, with a focus on new features and tools incorporated during this period.

    Alignment and calibration workflows in CMS require a significant operational effort, due to the complexity of the systems involved. To serve the variety of condition data management needs of the experiment, the alignment and calibration team has developed and deployed a set of web-based applications. The Condition DB Browser is the main portal to search, navigate and prepare a consistent set of calibrations to be consumed in the reconstruction of data for physics. It also hosts various data management tools for the conditions, including a customized display for certain calibration sets, an automatic user-defined notification agent for updates, a logging service for the user, and the automatic online-to-offline uploads. In this presentation we report on the operational experience of this web application from the 2017 data taking, with a focus on new features and tools incorporated during this period.

    Physics performance and fast turn around: the challenge of calibration and alignment at the CMS experiment during the LHC Run-II

    The CMS detector at the Large Hadron Collider (LHC) is a very complex apparatus with more than 70 million acquisition channels. To exploit its full physics potential, a very careful calibration of the various components, together with an optimal knowledge of their position in space, is essential. The CMS Collaboration has set up a powerful infrastructure to provide the best knowledge of these conditions at any given moment. The quick turnaround of these workflows has proven crucial both for the algorithms performing the online event selection and for the ultimate resolution of the offline reconstruction of the physics objects. This contribution reports on the design and performance of these workflows during the operation of the 13 TeV LHC Run II.

    The CMS Condition Database system

    The Condition Database plays a key role in the CMS computing infrastructure. The complexity of the detector and the variety of the sub-systems involved set tight requirements for the handling of the conditions. In the last two years the collaboration has put effort into the re-design of the Condition Database system, with the aim of improving the scalability and the operability for the data taking starting in 2015. The re-design has focused on simplifying the architecture, using the lessons learned during the operation of the previous data-taking period. In the new system the relational features of the database schema are exploited mainly to handle the metadata (Tag and Interval of Validity), allowing for a limited and controlled set of queries. The bulk condition data (Payloads) are stored as unstructured binary data, allowing storage in a single table with a common layout for all of the condition data types. In this presentation, we describe the full architecture of the system, including the services implemented for uploading payloads and the tools for browsing the database. Furthermore, the implementation choices for the core software are discussed.
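    A schematic view of the data model described above, with metadata in relational form and payloads as opaque binary blobs addressed by a hash, might look like the C++ sketch below; the type and field names are illustrative assumptions, not the actual CMS schema.

        #include <cstdint>
        #include <string>
        #include <vector>

        // Payload table: one common layout for every condition type; the content is an
        // opaque serialised blob addressed by its hash.
        struct PayloadRow {
          std::string hash;          // primary key, e.g. a digest of the blob
          std::string objectType;    // C++ type name used to de-serialise the blob
          std::vector<char> data;    // unstructured binary data
        };

        // IOV table: maps a "since" time to the payload valid from that point onwards.
        struct IOVRow {
          std::string tagName;       // foreign key into the Tag table
          std::uint64_t since;       // start of validity (run, lumisection or timestamp)
          std::string payloadHash;   // foreign key into the Payload table
        };

        // Tag table: a named, time-ordered sequence of IOVs for one condition record.
        struct TagRow {
          std::string name;          // primary key
          std::string timeType;      // interpretation of "since": run / lumi / time
        };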